
    Ranked List Loss for Deep Metric Learning

    The objective of deep metric learning (DML) is to learn embeddings that capture semantic similarity and dissimilarity among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence, because a large proportion of pairs or triplets become trivial as the model improves. To address this, ranking-motivated structured losses have recently been proposed; they incorporate multiple examples, exploit the structured information among them, converge faster, and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of the data points is incorporated to build the similarity structure, so some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is iteratively used as a query, and the remaining examples compose the gallery to search, i.e., the support set in the few-shot setting. The gallery examples are split into a positive set and a negative set. For every mini-batch, the objective of the ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space; as a result, the intra-class data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve the useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal in comparison with state-of-the-art methods.
    Comment: Accepted to T-PAMI; for the official version, please go to IEEE Xplore. Fine-grained image retrieval task. Our source code is available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DM
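    As a rough illustration of the set-based objective described above, here is a minimal PyTorch sketch, not the authors' official implementation (see the repository above): the hyperparameter names `alpha` and `margin` and the uniform averaging over violators are assumptions. Every in-batch example acts as the query, the rest form the gallery, positives are pulled inside a hypersphere of radius alpha - margin, and negatives are pushed beyond alpha:

```python
import torch
import torch.nn.functional as F

def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4):
    """Hedged sketch of a set-based ranked list loss.

    Assumes L2-normalised embeddings and that every class has at
    least two examples in the mini-batch. `alpha` and `margin` are
    illustrative hyperparameters, not the paper's tuned values.
    """
    n = embeddings.size(0)
    dist = torch.cdist(embeddings, embeddings)          # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # class-equality mask
    eye = torch.eye(n, dtype=torch.bool, device=embeddings.device)
    pos_mask = same & ~eye                              # gallery positives (query excluded)
    neg_mask = ~same                                    # gallery negatives
    # Pull non-trivial positives inside the hypersphere of radius (alpha - margin).
    pos_loss = F.relu(dist[pos_mask] - (alpha - margin)).mean()
    # Push violating negatives beyond the boundary alpha.
    neg_loss = F.relu(alpha - dist[neg_mask]).mean()
    return pos_loss + neg_loss
```

    The paper additionally weights negatives by how strongly they violate the boundary; the uniform averaging above is a simplification of that scheme.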

    CGRP and migraine

    Migraine has been estimated to be the seventh highest cause of disability worldwide, and the third most common disease worldwide after dental caries and tension-type headache. However, the use of currently available acute and prophylactic medications to control this condition, such as 5-HT1 agonists (triptans) and beta-blockers, is limited by side effects and incomplete efficacy, so alternative and more specific treatments are required. More recently, an improved understanding of the pathophysiology of the disease has allowed investigation of new therapeutic targets. The 37-amino-acid neuropeptide calcitonin gene-related peptide (CGRP) has been shown to play a crucial role in the trigeminocervical complex pathway for nociception in the head. Studies have demonstrated elevated CGRP levels in the external jugular vein during the headache phase of migraine, with reduction following headache resolution. Furthermore, CGRP infusion triggers migraine-type headache, and subsequent treatment with triptans results in normalization of CGRP levels. This neuropeptide is therefore thought to have a central role in pain modulation, as it participates in the neurovascular pathway and contributes to the vasodilation and neurogenic inflammation that lead to migrainous attacks. Targeting CGRP may provide the ideal therapeutic tool needed for control of this common and debilitating illness. The three studies chosen for this month's journal club are a small sample of the large amount of research being performed on CGRP. The first investigates whether its measurement can be used to classify migraine. The second and third articles are phase II clinical trials which investigate the use of CGRP antagonists and a monoclonal antibody against CGRP

    IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters

    In this work, we study robust deep learning against abnormal training data from the perspective of example weighting built into empirical loss functions, i.e., the gradient magnitude with respect to logits, an angle that has not been thoroughly studied so far. We have two key findings: (1) Mean Absolute Error (MAE) does not treat examples equally. We present new observations and insightful analysis of MAE, which has been theoretically proved to be noise-robust. First, we reveal its underfitting problem in practice. Second, we show that MAE's noise-robustness comes from emphasising uncertain examples rather than from treating training samples equally, as claimed in prior work. (2) The variance of the gradient magnitude matters. We propose an effective and simple solution to enhance MAE's fitting ability while preserving its noise-robustness. Without changing MAE's overall weighting scheme, i.e., which examples get higher weights, we simply change its weighting variance non-linearly so that the impact ratio between two examples is adjusted. Our solution is termed Improved MAE (IMAE). We demonstrate IMAE's effectiveness with extensive experiments: image classification under clean labels, synthetic label noise, and real-world unknown noise. We conclude that IMAE is superior to CCE (categorical cross entropy), the most popular loss for training DNNs.
    Comment: Updated version. Code: https://github.com/XinshaoAmosWang/Improving-Mean-Absolute-Error-against-CCE. Please feel free to contact us for discussions or implementation problems
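    The core mechanism described in the abstract, keeping MAE's weighting order while stretching its weighting variance, can be sketched as follows. This is a hedged approximation rather than the official implementation (see the repository above); the exponential remapping exp(T * p * (1 - p)) and the temperature `T` are assumptions based on the description:

```python
import torch
import torch.nn.functional as F

def imae_loss(logits, targets, T=8.0):
    """Hedged sketch of IMAE-style example reweighting.

    For softmax + MAE, an example's gradient magnitude w.r.t. its
    logits scales with p * (1 - p), where p is the predicted
    probability of the labelled class, so MAE emphasises uncertain
    examples (p near 0.5). This sketch keeps that ordering but
    remaps the weight non-linearly to widen its variance; the
    exponential form and `T` are illustrative assumptions.
    """
    probs = F.softmax(logits, dim=1)
    idx = torch.arange(logits.size(0), device=logits.device)
    p_true = probs[idx, targets]        # probability of the labelled class
    mae = 1.0 - p_true                  # per-example MAE, up to a constant factor
    # Detached weights: they rescale gradients without being differentiated through.
    w = torch.exp(T * p_true * (1.0 - p_true)).detach()
    return (w * mae).sum() / w.sum()
```

    Because the weights share MAE's ordering (largest near p = 0.5), the weighting scheme is unchanged; only its variance, and hence the impact ratio between examples, is stretched.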

    Materials for near-IR light modulation


    Sensor scheduling with time, energy and communication constraints

    In this paper, we present new algorithms and analysis for the linear inverse sensor placement and scheduling problems over multiple time instances with power and communication constraints. The proposed algorithms, which deal directly with minimizing the mean squared error (MSE), are based on a convex relaxation approach to the binary scheduling problems that arise in sensor network scenarios. We propose to balance the energy and communication demands of operating a network of sensors over time while still guaranteeing a minimum level of estimation accuracy. We measure this accuracy by the MSE, for which we provide average-case and lower-bound analyses that hold in general, irrespective of the scheduling algorithm used. We show experimentally how the proposed algorithms perform against state-of-the-art methods previously described in the literature.
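    To make the convex-relaxation idea concrete, here is a hedged single-time-instance sketch, not the paper's algorithm: the binary on/off schedule is relaxed to weights in [0, 1], the MSE of the linear inverse problem is taken as the trace of the inverse information matrix, and the energy and communication budgets are collapsed into one cardinality budget:

```python
import cvxpy as cp
import numpy as np

# Hedged sketch: relaxed sensor scheduling for a linear inverse problem.
# n candidate sensors with measurement rows h_i; pick at most `budget`
# of them to minimise the MSE proxy trace((sum_i w_i h_i h_i^T)^{-1}).
rng = np.random.default_rng(0)
n, d, budget = 30, 5, 10
H = rng.standard_normal((n, d))                  # candidate measurement vectors

w = cp.Variable(n)                               # relaxed 0/1 schedule
info = sum(w[i] * np.outer(H[i], H[i]) for i in range(n))
objective = cp.Minimize(cp.matrix_frac(np.eye(d), info))  # = trace(info^{-1})
constraints = [w >= 0, w <= 1, cp.sum(w) <= budget]
cp.Problem(objective, constraints).solve()

# Simple rounding: keep the `budget` largest relaxed weights.
selected = np.argsort(w.value)[::-1][:budget]
print(sorted(selected.tolist()))
```

    The paper's formulation additionally couples decisions across time instances through per-sensor energy and communication constraints, which this single-snapshot toy omits.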

    Classifying objects in LWIR imagery via CNNs
